Subgradient method
Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods converge even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient projection methods are suitable because they require little storage.
Subgradient projection methods are often applied to large-scale problems together with decomposition techniques. Such decomposition techniques often allow a simple distributed implementation of the method.
==Classical subgradient rules==

Let f\colon\mathbb{R}^n \to \mathbb{R} be a convex function with domain \mathbb{R}^n. A classical subgradient method iterates
:x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}
where g^{(k)} denotes a subgradient of f at x^{(k)}, and x^{(k)} is the k^{\rm th} iterate of x. If f is differentiable, then its only subgradient is the gradient vector \nabla f itself.
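For example, for the non-differentiable convex function f(x) = |x| on \mathbb{R}, the subgradients at x = 0 form the whole interval
:\partial f(0) = [-1, 1],
so any g \in [-1, 1] may be used as the subgradient there, while at any x \neq 0 the subgradient is unique and equals \sgn(x).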
It may happen that -g^{(k)} is not a descent direction for f at x^{(k)}. We therefore maintain a list f_{\rm best} that keeps track of the lowest objective function value found so far, i.e.
:f_{\rm best}^{(k)} = \min\{ f_{\rm best}^{(k-1)} , f(x^{(k)}) \}.
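The following is a minimal sketch of this classical iteration in Python with NumPy; the diminishing step size \alpha_k = 1/\sqrt{k+1}, the function names, and the test objective f(x) = \|Ax - b\|_1 are illustrative choices, not prescribed by the method itself.

import numpy as np

def subgradient_method(f, subgrad, x0, num_iters=500):
    # Classical subgradient iteration x^{(k+1)} = x^{(k)} - alpha_k g^{(k)},
    # keeping f_best^{(k)} = min(f_best^{(k-1)}, f(x^{(k)})) because -g^{(k)}
    # need not be a descent direction.
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(num_iters):
        g = subgrad(x)                    # any subgradient of f at x^{(k)}
        alpha = 1.0 / np.sqrt(k + 1)      # diminishing, non-summable step size
        x = x - alpha * g                 # subgradient step
        fx = f(x)
        if fx < f_best:                   # record the best point found so far
            f_best, x_best = fx, x.copy()
    return x_best, f_best

# Example: minimize the non-differentiable convex function f(x) = ||Ax - b||_1,
# for which A^T sign(Ax - b) is a subgradient at x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
f = lambda x: np.abs(A @ x - b).sum()
subgrad = lambda x: A.T @ np.sign(A @ x - b)
x_best, f_best = subgradient_method(f, subgrad, x0=np.zeros(5))
print(f_best)

With a diminishing, non-summable step size and bounded subgradients, f_{\rm best}^{(k)} converges to the minimum value of f, even though the individual iterates need not decrease the objective monotonically.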

Excerpt source: the free encyclopedia Wikipedia.
Read the full article "Subgradient method" on Wikipedia.


